Petrobras’ HPC Infrastructure for Energy

On Thursday, June 29, Guilherme Vilela of Petrobras, one of the world’s leading energy companies, gave an online talk in conjunction with the Society of HPC Professionals’ Lunch and Learn program.

July 10, 2023 /

Sarah F. Hill



Vilela works as an IT HPC advisor at Petrobras, where his responsibilities include defining, implementing, and troubleshooting the software and hardware infrastructure of Linux clusters used for seismic data processing.

Petrobras relies heavily on high-performance computing (HPC) infrastructure for various workloads, ranging from seismic processing to reservoir simulation, and more recently, AI. Today it operates several TOP500-ranked supercomputers, including the Pegasus and Dragão systems. The presentation provided an overview of Petrobras’ HPC infrastructure history.

Seismic processing helps exploration companies find oil; at Petrobras, this means a few big jobs with very large datasets running on CPUs and GPUs for weeks at a time. Vilela emphasized that the key imaging algorithms use GPUs and are developed in-house. Reservoir simulation, on the other hand, helps extract oil more efficiently from wells. These jobs do not use GPUs, work with much smaller datasets, and run as many smaller, simultaneous jobs. Extracting oil from a deepwater well costs about $50 million, and with 300 offshore wells positioned to be drained in the next five years, it is a $10 billion endeavor, cementing the need for a robust HPC framework.

Petrobras has come a long way since its first IBM supercomputing system in 1969. In 2012, Grifo04 was the most impressive supercomputer in the entire Latin American energy market. Then the price of oil tanked, and by 2018 Petrobras' HPC resources had become nearly obsolete.

Since then, Petrobras has invested heavily in its infrastructure and has once again appeared on the TOP500 and Green500 lists for the past four years. Albacora, a cluster of 448 nodes with 64 cores each, was introduced just this past year for reservoir simulation. Petrobras now has more than 20 times the HPC capacity it had in 2018. On the seismic side, the Phoenix, Atlas, Dragão, and Pegasus systems have been added.

Vilela also discussed how the cloud is changing the landscape. While the cloud is not a great fit for seismic applications, he expects the HPC ecosystem to mix on-premises and cloud-based computing for machine learning workloads in the coming years.

A lively question-and-answer period ensued, and Vilela was thanked for his overview of the oil and gas HPC applications that Petrobras employs.
